# Natural language understanding
Rouwei 0.6
RouWei-0.6 is a text-to-image model fine-tuned extensively on Illustrious-xl-early-release-v0, specializing in anime-style image generation with exceptional prompt-following capability and aesthetic performance.
Image Generation English
Minthy
36
3
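A minimal text-to-image sketch with diffusers. The Hub id `Minthy/RouWei-0.6` is an assumption built from the entry's author and name, and an SDXL pipeline is assumed because Illustrious-xl is SDXL-based; treat this as an illustration, not the model's documented usage:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Assumed Hub id and diffusers-format weights; Illustrious-xl derivatives are SDXL models.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Minthy/RouWei-0.6", torch_dtype=torch.float16
).to("cuda")

# Danbooru-style tag prompts are the usual convention for anime SDXL fine-tunes.
image = pipe(
    prompt="1girl, silver hair, starry night sky, detailed background",
    negative_prompt="lowres, bad anatomy, watermark",
    num_inference_steps=28,
    guidance_scale=5.5,
).images[0]
image.save("rouwei_sample.png")
```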
Llama 3.1 Centaur 70B
Llama-3.1-Centaur-70B is a cognitive foundation model capable of predicting and simulating human behavior in any behavioral experiment described in natural language.
Large Language Model
Transformers

marcelbinz
1,825
20
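A minimal sketch of simulating a behavioral experiment with the transformers text-generation pipeline, assuming the Hub id `marcelbinz/Llama-3.1-Centaur-70B`; the prompt format below is illustrative only:

```python
from transformers import pipeline

# Assumed Hub id from the entry's author and name; a 70B checkpoint needs
# multiple GPUs or quantization, and device_map="auto" requires accelerate.
generator = pipeline(
    "text-generation",
    model="marcelbinz/Llama-3.1-Centaur-70B",
    device_map="auto",
)

# Illustrative usage: describe a behavioral experiment in plain text and let
# the model continue with the simulated participant's choice.
prompt = (
    "You are playing a two-armed bandit game. Machine F has paid out "
    "7 points on average, machine J has paid out 3 points. You press <<"
)
print(generator(prompt, max_new_tokens=4)[0]["generated_text"])
```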
Codenlbert Tiny
MIT
A BERT-small-based model for classifying text as code or natural language, with over 99% reported accuracy
Text Classification
Transformers Supports Multiple Languages

vishnun
68
2
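A minimal code-vs-natural-language classification sketch with the transformers pipeline, assuming the Hub id `vishnun/codenlbert-tiny`; the label names in the comment are illustrative, so check the model card:

```python
from transformers import pipeline

# Assumed Hub id from the entry's author and name.
clf = pipeline("text-classification", model="vishnun/codenlbert-tiny")

samples = [
    "for i in range(10): print(i ** 2)",
    "The meeting has been moved to Thursday afternoon.",
]
for text in samples:
    # Output is a label/score dict, e.g. {'label': 'code', 'score': 0.99}.
    print(text, "->", clf(text)[0])
```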
Zero Shot Vanilla Binary Bert
MIT
A BERT-based zero-shot text classification model trained on the aspect-normalized UTCD dataset.
Text Classification
Transformers English

claritylab
26
0
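One plausible interface for a binary zero-shot classifier is to score each (text, candidate label) sentence pair and keep the best-scoring label. The sketch below assumes that interface and the Hub id `claritylab/zero-shot-vanilla-binary-bert`; the project may ship its own wrapper, so treat this as an illustration of the idea rather than the documented API:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed Hub id and interface: each (text, label) pair gets a binary
# match/no-match score, and labels are ranked by the match probability.
name = "claritylab/zero-shot-vanilla-binary-bert"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

text = "I need to reset my account password."
labels = ["account access", "billing", "shipping"]

with torch.no_grad():
    enc = tokenizer([text] * len(labels), labels, return_tensors="pt", padding=True)
    probs = model(**enc).logits.softmax(dim=-1)

# Treating column 1 as the "match" class is an assumption; check id2label.
print(labels[probs[:, 1].argmax().item()])
```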
Bert Finetuned On Nq Short
An open-domain question answering model trained on the complete Natural Questions (NQ) dataset, capable of answering various factual questions
Large Language Model
Transformers

eibakke
13
1
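A minimal reader sketch with the transformers question-answering pipeline (in a full open-domain setup the context would come from a retriever); the Hub id `eibakke/bert-finetuned-on-nq-short` is an assumption from the entry's author and name:

```python
from transformers import pipeline

# Assumed Hub id; the reader extracts a short answer span from a passage.
qa = pipeline("question-answering", model="eibakke/bert-finetuned-on-nq-short")

result = qa(
    question="Who developed BERT?",
    context="BERT is a pre-trained language model based on the Transformer "
            "architecture, developed by Google in 2018.",
)
print(result["answer"], round(result["score"], 3))
```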
Legacy1
BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained language model based on the Transformer architecture, developed by Google.
Large Language Model
Transformers

khoon485
19
0
Erlangshen DeBERTa V2 710M Chinese
Apache-2.0
This is a 710M-parameter DeBERTa-v2 model focused on Chinese natural language understanding tasks. It is pre-trained with whole-word masking, providing strong support for Chinese NLP.
Large Language Model
Transformers Chinese

IDEA-CCNL
246
13
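A minimal fill-mask sketch, assuming the Hub id `IDEA-CCNL/Erlangshen-DeBERTa-v2-710M-Chinese` and a slow (non-fast) tokenizer, which Chinese DeBERTa-v2 releases commonly require:

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, FillMaskPipeline

# Assumed Hub id; use_fast=False because Chinese DeBERTa-v2 releases often
# ship a slow BERT-style tokenizer.
name = "IDEA-CCNL/Erlangshen-DeBERTa-v2-710M-Chinese"
tokenizer = AutoTokenizer.from_pretrained(name, use_fast=False)
model = AutoModelForMaskedLM.from_pretrained(name)

fill = FillMaskPipeline(model=model, tokenizer=tokenizer)
# "Beijing is China's [MASK]-capital." - expects 首 as the top prediction.
for pred in fill("北京是中国的[MASK]都。")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```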
Distilbart Mnli 12 9
DistilBart-MNLI is a lightweight version of bart-large-mnli distilled with a no-teacher distillation technique, maintaining high accuracy while reducing model complexity.
Text Classification
valhalla
8,343
12
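MNLI checkpoints like this one (and the 12-6 and 12-3 variants below) are typically used through the transformers zero-shot-classification pipeline, which rewrites each candidate label as an entailment hypothesis; `valhalla/distilbart-mnli-12-9` is the Hub id implied by the entry:

```python
from transformers import pipeline

# Each candidate label becomes a hypothesis ("This example is about {label}.")
# scored for entailment against the input text.
clf = pipeline("zero-shot-classification", model="valhalla/distilbart-mnli-12-9")

result = clf(
    "The new graphics card doubles frame rates in most games.",
    candidate_labels=["technology", "sports", "politics"],
)
print(result["labels"][0], round(result["scores"][0], 3))
```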
Indobert Lite Large P2
MIT
IndoBERT is an advanced language model based on BERT, specifically designed for Indonesian, trained with masked language modeling and next sentence prediction objectives.
Large Language Model
Transformers Other

indobenchmark
117
1
Distilbart Mnli 12 6
DistilBart-MNLI is a distilled version of bart-large-mnli produced with a no-teacher distillation technique, significantly reducing model size while maintaining high performance.
Text Classification
valhalla
49.63k
11
Distilbart Mnli 12 3
DistilBart-MNLI is a distilled version of bart-large-mnli using teacher-free distillation techniques, achieving performance close to the original model while being more lightweight.
Text Classification
valhalla
8,791
19
Bert Base Chinese
GPL-3.0
A Traditional Chinese BERT model developed by the Academia Sinica CKIP Lab, supporting natural language processing tasks
Large Language Model Chinese
ckiplab
81.96k
26
Indobert Large P1
MIT
IndoBERT is an advanced Indonesian language model based on the BERT model, trained with masked language modeling and next-sentence prediction objectives.
Large Language Model Other
indobenchmark
1,686
4
Indobert Base P2
MIT
IndoBERT is the state-of-the-art Indonesian language model based on the BERT model, trained through masked language modeling and next-sentence prediction objectives.
Large Language Model Other
indobenchmark
25.89k
5
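A minimal masked-language-modeling sketch for the IndoBERT family listed above, using the Hub id implied by this entry, `indobenchmark/indobert-base-p2`; the lite and large variants load the same way:

```python
from transformers import pipeline

# The indobenchmark ids follow the names in these entries; swap in
# indobert-lite-large-p2 or indobert-large-p1 as needed.
fill = pipeline("fill-mask", model="indobenchmark/indobert-base-p2")

# "Jakarta is the [MASK] of Indonesia." - "ibukota" (capital) should rank highly.
for pred in fill("Jakarta adalah [MASK] Indonesia.")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```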
Deberta V3 Xsmall Squad2
A DeBERTa-v3 xsmall model fine-tuned on SQuAD 2.0 for extractive question answering. DeBERTa, developed by Microsoft, improves natural language understanding through a disentangled attention mechanism and an enhanced mask decoder, surpassing RoBERTa on multiple NLU tasks.
Question Answering System
Transformers English

nbroad
17
0
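Because SQuAD 2.0 includes unanswerable questions, this checkpoint can also abstain; a sketch assuming the Hub id `nbroad/deberta-v3-xsmall-squad2` built from the entry's author and name:

```python
from transformers import pipeline

# Assumed Hub id from the entry's author and name.
qa = pipeline("question-answering", model="nbroad/deberta-v3-xsmall-squad2")

context = "DeBERTa was developed by Microsoft and improves on BERT and RoBERTa."
print(qa(question="Who developed DeBERTa?", context=context))

# handle_impossible_answer lets a SQuAD 2.0 model return an empty answer
# when the passage contains none.
print(qa(question="When was DeBERTa released?", context=context,
         handle_impossible_answer=True))
```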
Albert Base Chinese
GPL-3.0
A Traditional Chinese ALBERT model developed by the Academia Sinica CKIP Lab, whose releases also cover BERT and GPT-2 architectures along with natural language processing tools
Large Language Model
Transformers Chinese

ckiplab
280
11
Chinese Macbert Large
Apache-2.0
MacBERT is an improved Chinese BERT model that adopts MLM as correction (Mac) as its pre-training task, alleviating the inconsistency between the pre-training and fine-tuning stages.
Large Language Model Chinese
hfl
13.05k
42
Deberta V3 Large
MIT
DeBERTaV3 improves upon DeBERTa with ELECTRA-style pre-training and gradient-disentangled embedding sharing techniques, excelling in natural language understanding tasks
Large Language Model
Transformers English

microsoft
343.39k
213
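DeBERTa-v3 large is typically used as an encoder backbone for fine-tuning; a minimal loading sketch with the Hub id `microsoft/deberta-v3-large` (its tokenizer needs the sentencepiece package):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# The Hub release ships only the pre-trained encoder; this classification
# head is randomly initialized and must be fine-tuned on task data.
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-large", num_labels=3
)

batch = tokenizer(["An example sentence."], return_tensors="pt")
print(model(**batch).logits.shape)  # torch.Size([1, 3])
```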
Deberta V2 Xlarge Mnli
MIT
DeBERTa-v2 xlarge is a large-scale pre-trained language model based on the Transformer architecture, fine-tuned here for natural language inference on MNLI.
Large Language Model
Transformers English

NDugar
38
0
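A minimal natural language inference sketch; `NDugar/deberta-v2-xlarge-mnli` is an assumed Hub id from the entry's author and name, and label names are read from the model config rather than hard-coded:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed Hub id; verify labels via model.config.id2label rather than
# assuming the usual contradiction/neutral/entailment order.
name = "NDugar/deberta-v2-xlarge-mnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

with torch.no_grad():
    enc = tokenizer(premise, hypothesis, return_tensors="pt")
    probs = model(**enc).logits.softmax(dim=-1)[0]

for idx, p in enumerate(probs.tolist()):
    print(model.config.id2label[idx], round(p, 3))
```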
Xlm Roberta Large En Ru Mnli
A version of xlm-roberta-large-en-ru fine-tuned on the MNLI dataset, supporting natural language inference in English and Russian.
Text Classification
Transformers Supports Multiple Languages

DeepPavlov
105
2
DDDC
This is a versatile large language model capable of understanding and generating natural language text
Large Language Model
Transformers

Biasface
16
0